
    An Unsupervised Algorithm for Change Detection in Hyperspectral Remote Sensing Data Using Synthetically Fused Images and Derivative Spectral Profiles

    Multitemporal hyperspectral remote sensing data have the potential to detect altered areas on the earth’s surface. However, dissimilar radiometric and geometric properties between the multitemporal data, caused by differences in the acquisition time or position of the sensors, must be resolved before hyperspectral imagery can be used to detect changes in natural and human-impacted areas. In addition, noise in the hyperspectral spectrum decreases change-detection accuracy when general change-detection algorithms are applied to hyperspectral images. To address these problems, we present an unsupervised change-detection algorithm based on statistical analyses of spectral profiles, where the profiles are generated by a synthetic image-fusion method for multitemporal hyperspectral images. The method aims to minimize the noise between spectra at identical locations, thereby increasing the change-detection rate and decreasing the false-alarm rate, without reducing the dimensionality of the original hyperspectral data. Through a quantitative comparison on an actual dataset acquired by airborne hyperspectral sensors, we demonstrate that the proposed method provides superior change-detection results relative to state-of-the-art unsupervised change-detection algorithms.
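
    As a rough illustration of the spectral-difference idea behind such unsupervised detectors, the sketch below compares two co-registered hyperspectral cubes pixel by pixel. The spectral smoothing and the global threshold are assumptions standing in for the paper's fusion-based noise minimization and statistical analysis, not the authors' actual algorithm.

```python
# A minimal sketch of unsupervised change detection on co-registered
# hyperspectral cubes of shape (rows, cols, bands). Illustrative only;
# the smoothing is a crude stand-in for fusion-based noise reduction.
import numpy as np
from scipy.ndimage import uniform_filter1d

def change_map(cube_t1, cube_t2, threshold_sigma=2.0):
    # Light smoothing along the spectral axis to suppress band-wise noise.
    s1 = uniform_filter1d(cube_t1.astype(np.float64), size=5, axis=2)
    s2 = uniform_filter1d(cube_t2.astype(np.float64), size=5, axis=2)
    # Per-pixel spectral change magnitude (Euclidean distance over bands),
    # computed on the full spectrum, i.e., without dimensionality reduction.
    diff = np.linalg.norm(s2 - s1, axis=2)
    # Flag pixels whose change magnitude is a global statistical outlier.
    return diff > diff.mean() + threshold_sigma * diff.std()
```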

    Automated Ortho-Rectification of UAV-Based Hyperspectral Data over an Agricultural Field Using Frame RGB Imagery

    Low-cost Unmanned Airborne Vehicles (UAVs) equipped with consumer-grade imaging systems have emerged as a potential remote sensing platform that could satisfy the needs of a wide range of civilian applications. Among these applications, UAV-based agricultural mapping and monitoring have attracted significant attention from both the research and professional communities. The interest in UAV-based remote sensing for agricultural management is motivated by the need to maximize crop yield. Remote sensing-based crop yield prediction and estimation are primarily based on imaging systems with different spectral coverage and resolution (e.g., RGB and hyperspectral imaging systems). Due to the data volume, RGB imaging is based on frame cameras, while hyperspectral sensors are primarily push-broom scanners. To cope with the limited endurance and payload constraints of low-cost UAVs, the agricultural research and professional communities have to rely on consumer-grade, lightweight sensors. However, the geometric fidelity of information derived from push-broom hyperspectral scanners is quite sensitive to the position and orientation established through a direct geo-referencing unit onboard the imaging platform (i.e., an integrated Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS)). This paper presents an automated framework for the integration of frame RGB images, push-broom hyperspectral scanner data, and consumer-grade GNSS/INS navigation data for accurate geometric rectification of the hyperspectral scenes. The approach relies on the navigation data, together with a modified Speeded-Up Robust Features (SURF) detector and descriptor, to automate the identification of conjugate features in the RGB and hyperspectral imagery. The SURF modification takes the available direct geo-referencing information into consideration to improve the reliability of the matching procedure in the presence of repetitive texture within a mechanized agricultural field. Identified features are then used to improve the geometric fidelity of the previously ortho-rectified hyperspectral data. Experimental results from two real datasets show that the geometric rectification of the hyperspectral data was improved by almost one order of magnitude.
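
    The matching step can be pictured as follows. The sketch uses OpenCV's SIFT as a stand-in for the paper's modified SURF (SURF is patent-encumbered in stock OpenCV builds), and the max_offset_px bound is an assumed proxy for the direct geo-referencing constraint described above.

```python
# A hedged sketch of constrained feature matching between an RGB ortho-image
# and a single hyperspectral band (both 8-bit grayscale numpy arrays).
# Matches whose displacement exceeds what the GNSS/INS navigation solution
# could plausibly allow are rejected, which suppresses false matches on
# repetitive agricultural texture.
import cv2
import numpy as np

def constrained_matches(rgb_gray, hyper_band, max_offset_px=20.0):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(rgb_gray, None)
    kp2, des2 = sift.detectAndCompute(hyper_band, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        # Lowe's ratio test to reject ambiguous descriptor matches.
        if m.distance < 0.7 * n.distance:
            p1 = np.array(kp1[m.queryIdx].pt)
            p2 = np.array(kp2[m.trainIdx].pt)
            # Geo-referencing constraint: the scenes are already roughly
            # aligned, so true conjugate points can only be a few pixels apart.
            if np.linalg.norm(p1 - p2) <= max_offset_px:
                good.append(m)
    return good
```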

    Object-Based Classification of an Urban Area through a Combination of Aerial Image and Airborne Lidar Data

    This paper studies the effect of airborne elevation information on the classification of an aerial image in an urban area. In an urban area, it is difficult to classify buildings relying solely on the spectral information obtained from aerial images, because urban buildings possess a variety of roof colors. Combining Lidar data with aerial images therefore overcomes the difficulties caused by the heterogeneous appearance of buildings. In the first stage of the process, building information is extracted using the normalized Digital Surface Model, return information derived from the airborne Lidar data, and vegetation information obtained through preclassification. In the second stage, the aerial image is segmented into objects and overlaid with the building information extracted in the first stage; a defined rule is then applied to the resulting image to determine whether or not each object is a building. In the final stage, the aerial image is classified using the building objects extracted in the prior stages as ancillary data. This classification procedure uses elevation and intensity information obtained from the Lidar data, as well as the red, green, and blue bands of the aerial image. The results show that combining an aerial image with airborne Lidar data yields higher accuracy and improved classification, especially for building objects, than relying solely on an aerial image.
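
    A minimal sketch of the first-stage building extraction might look like the following. The multi-return vegetation heuristic and the numeric thresholds are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of a first-stage building mask: cells that are tall in the
# normalized DSM and are not vegetation. Vegetation is inferred here from the
# first-minus-last Lidar return height gap, one common heuristic; the 2.5 m
# and 0.5 m thresholds are illustrative assumptions.
import numpy as np

def building_mask(dsm, dtm, first_return_z, last_return_z,
                  min_height=2.5, veg_return_gap=0.5):
    ndsm = dsm - dtm                  # normalized DSM: height above ground
    elevated = ndsm > min_height      # candidate above-ground objects
    # Canopy lets laser pulses penetrate, so first and last returns differ
    # strongly over vegetation but hardly at all over solid roofs.
    vegetation = (first_return_z - last_return_z) > veg_return_gap
    return elevated & ~vegetation     # tall and solid: likely building
```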

    Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery

    For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as the scale-invariant feature transform and speeded-up robust features, to VHR multi-temporal images has limitations. First, these methods cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor’s off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, registration noise pixels extracted between the georegistered multi-temporal images are analyzed locally to obtain a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that minimizes the local misalignment among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework: the georegistration achieved approximately pixel-level accuracy for most of the scenes, and the co-registration further improved the results for all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
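
    The final CP-driven warping step could be sketched as below. The piecewise-affine model is one possible choice of non-rigid transformation; the paper does not necessarily use this exact function.

```python
# A sketch of the fine co-registration step: fit a non-rigid transform to
# well-distributed corresponding points (CPs) and resample the sensed image
# into the reference frame. Single-band 2D images are assumed here.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def coregister(sensed_image, cps_reference, cps_sensed):
    # cps_* are (N, 2) arrays of (col, row) coordinates of matched CPs.
    tform = PiecewiseAffineTransform()
    tform.estimate(cps_reference, cps_sensed)  # maps reference -> sensed
    # warp() uses the transform as the inverse map (output coords -> input
    # coords), so local misalignments are corrected wherever CPs constrain
    # the triangulated mesh.
    return warp(sensed_image, tform, output_shape=sensed_image.shape)
```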

    Segmentation-Based Fine Registration of Very High Resolution Multitemporal Images

    In this paper, a segmentation-based approach to fine registration of multispectral and multitemporal very high resolution (VHR) images is proposed. The proposed approach aims at estimating and correcting the residual local misalignment [also referred to as registration noise (RN)] that often affects multitemporal VHR images even after standard registration. The method automatically extracts a set of object-representative points associated with regions of homogeneous spectral properties (i.e., objects in the scene). These points are distributed all over the considered scene and account for the high spatial correlation of pixels in VHR images. The method then estimates the amount and direction of residual local misalignment for each object-representative point by exploiting residual local misalignment properties in a multiple-displacement analysis framework. To this end, a multiscale differential analysis of the multispectral difference image is employed to model the statistical distribution of pixels affected by residual misalignment (i.e., RN pixels) and to detect them. The RN is used to perform a segmentation-based fine registration based on both temporal and spatial correlation. Accordingly, the method is particularly suitable for images with a large number of border regions, such as VHR images of urban scenes. Experimental results obtained on both simulated and real multitemporal VHR images confirm the effectiveness of the proposed method.
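
    One way to picture the multiscale differential analysis is sketched below: RN concentrates where the difference image varies sharply, so pixels that respond strongly across several Gaussian scales are flagged. The scales and the percentile threshold are assumptions, not the authors' parameters.

```python
# An illustrative sketch of multiscale differential analysis for detecting
# RN-candidate pixels in a (single-band) magnitude of the multispectral
# difference image. Scale set and percentile are assumed values.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def rn_candidates(diff_image, scales=(1.0, 2.0, 4.0), percentile=90):
    # Gradient magnitude of the difference image at several Gaussian scales.
    grads = [gaussian_gradient_magnitude(diff_image, sigma=s) for s in scales]
    # Keep only pixels that respond strongly at every scale, which suppresses
    # isolated noise that appears at a single scale only.
    response = np.minimum.reduce(grads)
    return response > np.percentile(response, percentile)
```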

    Edge-Based Registration-Noise Estimation in VHR Multitemporal and Multisensor Images

    Even after co-registration, Very High Resolution (VHR) multitemporal images acquired by different multispectral sensors (e.g., QuickBird, WorldView) show a residual misregistration due to dissimilarities in acquisition conditions and in sensor properties. Residual misregistration can be considered a source of noise and is referred to as Registration Noise (RN). Since RN is likely to have a negative impact on multitemporal information extraction, detecting and reducing it can increase multitemporal image processing accuracy. In this paper, we propose an approach to identify RN between VHR multitemporal and multisensor images. Under the assumption that dominant RN mainly exists along the boundaries of objects, we propose to use edge information in high-frequency regions to estimate it. This choice makes RN detection less dependent on radiometric differences and thus more effective in VHR multisensor image processing. To validate the effectiveness of the proposed approach, multitemporal multisensor datasets were built from QuickBird and WorldView VHR images. Both qualitative and quantitative assessments demonstrate the effectiveness of the proposed RN identification approach compared to the state-of-the-art one.
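
    The edge-based idea can be illustrated roughly as follows; Canny edge maps and the pixel tolerance are assumptions standing in for the paper's edge extraction in high-frequency regions.

```python
# A hedged sketch of edge-based RN estimation: since dominant RN lies along
# object boundaries, compare binary edge maps of the two co-registered images
# and keep edge pixels with no counterpart within a small tolerance in the
# other image. Inputs are 2D grayscale arrays; tol_px is an assumed tolerance.
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.feature import canny

def rn_edges(img_t1, img_t2, tol_px=2):
    e1, e2 = canny(img_t1), canny(img_t2)
    grow = np.ones((2 * tol_px + 1, 2 * tol_px + 1), dtype=bool)
    # Edges present in one date but absent within tol_px in the other are
    # treated here as RN candidates; comparing edge geometry rather than
    # raw radiometry keeps the test usable across different sensors.
    rn_1 = e1 & ~binary_dilation(e2, structure=grow)
    rn_2 = e2 & ~binary_dilation(e1, structure=grow)
    return rn_1 | rn_2
```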